Recent analyses of self-supervised learning (SSL) find that the following data-centric properties are essential for learning good representations: invariance to task-irrelevant semantics, separability of classes in some latent space, and recoverability of labels from augmented samples. However, given their discrete, non-Euclidean nature, graph datasets and graph SSL methods are unlikely to satisfy these properties. This raises the question: how do graph SSL methods, such as contrastive learning (CL), work? To systematically probe this question, we perform a generalization analysis of CL under generic graph augmentations (GGAs), with a focus on data-centric properties. Our analysis yields formal insights into the limitations of GGAs and the necessity of task-relevant augmentations. As we show empirically, GGAs do not induce task-relevant invariances on common benchmark datasets, leading to only marginal gains over naive, untrained baselines. Our theory motivates a synthetic data generation process that enables control over task-relevant information and admits pre-defined optimal augmentations. This flexible benchmark helps us identify yet-unrecognized limitations of advanced augmentation techniques (e.g., automated methods). Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL.
Federated learning is generally used in tasks where labels are readily available (e.g., next-word prediction). Relaxing this constraint requires the design of unsupervised learning techniques that can support the desirable properties of federated training: robustness to statistical/systems heterogeneity, scalability with the number of participants, and communication efficiency. Prior work on this topic has focused on directly extending centralized self-supervised learning techniques, which were not designed to have the properties listed above. To address this, we propose Orchestra, a novel unsupervised federated learning technique that exploits the federation's hierarchy to orchestrate a distributed clustering task and enforce a globally consistent partitioning of clients' data into discriminable clusters. We show that the algorithmic pipeline in Orchestra guarantees good generalization performance under a linear probe, allowing it to outperform alternative techniques under a broad range of conditions, including variations in heterogeneity, number of clients, participation rate, and local epochs.
Graph classification has applications in bioinformatics, the social sciences, automated fake news detection, web document classification, and more. In many practical scenarios, including web-scale applications, where labels are scarce or hard to obtain, unsupervised learning is a natural paradigm, but it trades off performance. Recently, contrastive learning (CL) has enabled unsupervised computer vision models to compete well against their supervised counterparts. Theoretical and empirical works analyzing visual CL frameworks find that leveraging large datasets and domain-aware augmentations is essential for framework success. Interestingly, graph CL frameworks often report high performance while using orders-of-magnitude smaller data and employing domain-agnostic augmentations (e.g., node or edge dropping, feature perturbation) that may corrupt the graphs' underlying properties. Motivated by these discrepancies, we seek to determine: (i) why existing graph CL frameworks perform well despite weak augmentations and limited data; and (ii) whether adhering to visual CL principles can improve performance on graph classification tasks. Through extensive analysis, we identify flawed practices in graph data augmentation and the evaluation protocols commonly used in the graph CL literature, and propose improved practices and sanity checks for future research and applications. We show that on small benchmark datasets, the inductive bias of graph neural networks can significantly compensate for the limitations of existing frameworks. In a study with relatively larger graph classification tasks, we find that commonly used domain-agnostic augmentations perform poorly, while adhering to the principles of visual CL can significantly improve performance. For example, in graph-based document classification, which can be used for better web search, task-relevant augmentations improve accuracy by 20%.
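The domain-agnostic augmentations mentioned above, such as node and edge dropping, can be sketched in a few lines. The snippet below is an illustrative toy (the function names and the 4-cycle graph are invented for this sketch, not taken from any graph CL framework); it also shows how node dropping can destroy graph-level structure such as a cycle, which is the kind of corruption the abstract warns about.

```python
import random

def drop_edges(edges, p, rng=None):
    """Remove each edge independently with probability p."""
    rng = rng or random.Random(0)
    return [e for e in edges if rng.random() >= p]

def drop_nodes(edges, nodes, p, rng=None):
    """Remove each node with probability p, along with its incident edges."""
    rng = rng or random.Random(0)
    kept = {n for n in nodes if rng.random() >= p}
    return kept, [(u, v) for (u, v) in edges if u in kept and v in kept]

# A 4-cycle: with this seed, nodes 2 and 3 are dropped, so the augmented
# graph is no longer a cycle -- a graph-level property has been destroyed.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
kept, aug = drop_nodes(edges, nodes=[0, 1, 2, 3], p=0.5)
```

Because these operations are oblivious to what makes a graph a member of its class, they can just as easily delete the task-relevant substructure as the irrelevant one.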
Catastrophic forgetting undermines the effectiveness of deep neural networks (DNNs) in scenarios such as continual learning and lifelong learning. While several methods have been proposed to address this problem, limited work explains why these methods work well. This paper aims to better explain a commonly used technique for avoiding catastrophic forgetting: quadratic regularization. We show that quadratic regularizers prevent forgetting of past tasks by interpolating current and previous parameter values at each training iteration. Over multiple training iterations, this interpolation operation lowers the learning rates of more important model parameters, thereby minimizing their movement. Our analysis also reveals two drawbacks of quadratic regularization: (a) the dependence of parameter interpolation on training hyperparameters, which often leads to training instability, and (b) the assignment of lower importance to deeper layers, which are generally where forgetting occurs in DNNs. Via a simple modification to the order of operations, we show these drawbacks can be easily avoided, resulting in 6.2% higher average accuracy at 4.5% lower average forgetting. We confirm the robustness of our results by training over 2000 models in different settings. Code is available at \url{https://github.com/ekdeepslubana/qrforgetting}
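The interpolation view described in this abstract can be made concrete with a single regularized SGD step. This is a minimal sketch under assumed notation (`lam` for the regularization strength, `importance` for per-parameter importance weights, `theta_prev` for the previous task's parameters); it is not the paper's implementation.

```python
def quad_reg_step(theta, grad, theta_prev, importance, lr, lam):
    """One SGD step on: task_loss + (lam/2) * importance * (theta - theta_prev)^2.

    Rearranging the update shows the penalty acts as an interpolation:
        theta <- (1 - lr*lam*imp) * theta + lr*lam*imp * theta_prev - lr*grad
    so the more important a parameter, the harder it is pulled back toward
    its previous value (an effective per-parameter learning-rate reduction).
    """
    return [
        (1 - lr * lam * w) * t + lr * lam * w * tp - lr * g
        for t, g, tp, w in zip(theta, grad, theta_prev, importance)
    ]
```

With `grad = 0`, the step is a pure interpolation between `theta` and `theta_prev`, and the interpolation weight `lr * lam * importance` grows with importance: a weight with importance 1 moves only slightly toward its old value, while a weight with importance 10 snaps all the way back.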
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify each pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground-truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed a one-of-a-kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction, which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video to the parts containing motion, and performs anomaly detection on the trimmed video.
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes like supervised data is a current challenge for Natural Language Processing (NLP). We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions for low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be the most beneficial for obtaining labels using selected query techniques. We implemented two model-driven acquisition functions for selecting the samples to be validated. This work uses transformer-based NMT systems: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence-based model (ALLCM), and an active learning margin-sampling-based model (ALMSM), for translating English to Hindi. The Bilingual Evaluation Understudy (BLEU) metric has been used to evaluate system results. The BLEU scores of the BM, FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model converge early and improve the overall quality of the translation system.
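The two acquisition functions named above, least confidence and margin sampling, have standard forms that can be sketched independently of any NMT toolkit. The snippet below assumes a model that outputs per-class probabilities; the function names and the toy pool are invented for illustration and are not Joey NMT's API.

```python
def least_confidence(probs):
    """Score = 1 - max predicted probability; higher means less confident."""
    return 1.0 - max(probs)

def margin_score(probs):
    """Score = negative gap between the top-2 probabilities;
    higher (closer to 0) means the model is less decided between them."""
    top2 = sorted(probs, reverse=True)[:2]
    return -(top2[0] - top2[1])

def select_batch(pool, score_fn, k):
    """Pick the k pool items whose predictive distributions score highest."""
    ranked = sorted(pool, key=lambda item: score_fn(item[1]), reverse=True)
    return [item[0] for item in ranked[:k]]

# Hypothetical unlabeled pool: (sample id, model's predicted distribution).
pool = [("s1", [0.9, 0.05, 0.05]),
        ("s2", [0.4, 0.35, 0.25]),
        ("s3", [0.5, 0.48, 0.02])]
```

Note that the two criteria can disagree: least confidence picks the sample with the lowest top probability (`s2`), while margin sampling picks the one with the tightest race between its top two classes (`s3`).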
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting, where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks become more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within the task. We generalize this finding to meta-RL and study the dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.
As language models have grown in parameters and layers, it has become much harder to train and run inference with them on single GPUs. This severely restricts the availability of large language models such as GPT-3, BERT-Large, and many others. A common technique to address this problem is pruning the network architecture by removing transformer heads, fully-connected weights, and other modules. The main challenge is to discern the important parameters from the less important ones. Our goal is to find strong metrics for identifying such parameters. We thus propose two strategies for calculating importance scores: Cam-Cut, based on GradCAM interpretations, and Smooth-Cut, based on SmoothGrad. Through this work, we show that our scoring functions are able to assign more relevant task-based scores to the network parameters, and thus both of our pruning approaches significantly outperform the standard weight- and gradient-based strategies, especially at higher compression ratios in BERT-based models. We also analyze our pruning masks and find them to be significantly different from the ones obtained using standard metrics.
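Score-based pruning of the kind compared in this abstract can be illustrated with a toy mask computation. The magnitude and gradient-times-weight scores below are the standard baselines the authors compare against, not the proposed Cam-Cut/Smooth-Cut scores, and all numbers are made up for this sketch.

```python
def prune_mask(scores, sparsity):
    """Keep the top (1 - sparsity) fraction of parameters by score."""
    k = int(len(scores) * sparsity)  # number of weights to drop
    cutoff = sorted(scores)[k] if k < len(scores) else float("inf")
    return [s >= cutoff for s in scores]

# Toy parameter vector with its gradients.
weights = [0.50, -0.01, 0.20, -0.30]
grads   = [0.01,  2.00, 0.10,  0.05]

# Two standard baselines: weight magnitude vs. first-order saliency |w * g|.
mag_scores = [abs(w) for w in weights]
sal_scores = [abs(w * g) for w, g in zip(weights, grads)]
```

At 50% sparsity the two scoring functions keep disjoint halves of this toy weight vector (magnitude keeps the large weights; saliency keeps the small-but-high-gradient ones), mirroring the abstract's observation that masks produced by different metrics can diverge substantially.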
Neoplasms (NPs) and neurological diseases and disorders (NDDs) are amongst the major classes of diseases underlying the deaths of a disproportionate number of people worldwide. To determine whether there exist distinctive features in the local wiring patterns of protein interactions emerging at the onset of a disease belonging to either of these two classes, we examined 112 and 175 protein interaction networks belonging to NPs and NDDs, respectively. Orbit usage profiles (OUPs) for each of these networks were enumerated by investigating the networks' local topology. 56 non-redundant OUPs (nrOUPs) were derived and used as network features for classification between these two disease classes. Four machine learning classifiers, namely k-nearest neighbour (KNN), support vector machine (SVM), deep neural network (DNN), and random forest (RF), were trained on these data. The DNN obtained the greatest average AUPRC (0.988) among these classifiers. DNNs developed on node2vec embeddings and the proposed nrOUP embeddings were compared using 5-fold cross-validation on the basis of the average values of six performance measures, viz., AUPRC, Accuracy, Sensitivity, Specificity, Precision, and MCC. The nrOUPs-based classifier performed better on all six performance measures.